Network Effects and A/B Testing: When Users Influence Each Other
Standard A/B tests break when users influence each other. Learn how network effects create interference and the experimental designs that handle it.
Articles exploring experiment design through the lens of behavioral science. Practical frameworks for growth leaders who measure impact in revenue, not vanity metrics.
17 articles
Master the 'design an A/B test' interview question with a structured framework. Learn the step-by-step approach that impresses hiring managers every time.
Pre-registration locks in your experiment plan before seeing results. Learn why it prevents p-hacking, metric shopping, and post-hoc rationalization.
When A/B tests track multiple metrics, statistical complexity increases. Learn frameworks for managing metric conflicts and making sound decisions.
Guardrail metrics prevent A/B tests from causing hidden damage. Learn how to set them up, monitor them, and use them to make better ship decisions.
Your primary metric determines whether an A/B test succeeds or fails. Learn how to select metrics that are sensitive, aligned, and actionable.
Learn how to design rigorous A/B tests from hypothesis to execution. Covers experiment structure, variable isolation, and common design mistakes.
Underpowered tests waste traffic, miss real wins, and erode trust in experimentation. Learn how to diagnose the problem and fix it before it kills your program.
The minimum detectable effect (MDE) is the most important and least understood input to A/B test design. Learn how to set it based on business impact, traffic, and decision context.
Statistical power determines whether your A/B test can detect real effects. Most experiments run underpowered, wasting traffic and producing misleading results.
Running A/B tests without proper sample size calculation wastes traffic and produces unreliable results. Learn the inputs, formulas, and practical trade-offs.
Explore how AI and large language models are transforming A/B test hypothesis generation by reducing confirmation bias, surfacing non-obvious patterns in historical data, and accelerating the path from insight to experiment.
Learn what statistical power means for A/B testing, why 80% is the standard, and how underpowered tests lead to costly false negatives that cause you to miss winning changes.
Master A/B test sample size calculation including the relationship between baseline conversion rate, minimum detectable effect, and statistical power to design reliable experiments.
A strong hypothesis is the difference between an experiment that teaches you something and one that wastes traffic. Learn the three-part structure, common pitfalls, and how to connect hypotheses to your research findings.
Anchoring bias silently distorts A/B test results by making the control variant the psychological reference point against which all alternatives are judged, reshaping how users perceive value and make decisions.
How CUPED reduces variance in A/B tests, shortens time-to-decision, and cuts down on inconclusive experiments by leveraging pre-experiment data.